A Communication-Latency-Aware Co-Simulation Platform for Safety and Comfort Evaluation of Cloud-Controlled ICVs
Zhao, Yongqi, Zhang, Xinrui, Mihalj, Tomislav, Schabauer, Martin, Putzer, Luis, Reichmann-Blaga, Erik, Boronyák, Ádám, Rövid, András, Soós, Gábor, Zhang, Peizhi, Xiong, Lu, Hu, Jia, Eichberger, Arno
Testing cloud-controlled intelligent connected vehicles (ICVs) requires simulation environments that faithfully emulate both vehicle behavior and realistic communication latencies. This paper proposes a latency-aware co-simulation platform integrating CarMaker and Vissim to evaluate safety and comfort under real-world vehicle-to-cloud (V2C) latency conditions. Two communication latency models, derived from empirical 5G measurements in China and Hungary, are incorporated and statistically modeled using Gamma distributions. A proactive conflict module (PCM) is proposed to dynamically control background vehicles and generate safety-critical scenarios. The platform is validated through experiments involving an exemplary system under test (SUT) across six testing conditions combining two PCM modes (enabled/disabled) and three latency conditions (none, China, Hungary). Safety and comfort are assessed using metrics including collision rate, distance headway, post-encroachment time, and the spectral characteristics of longitudinal acceleration. Results show that the PCM effectively increases driving environment criticality, while V2C latency primarily affects ride comfort. These findings confirm the platform's effectiveness in systematically evaluating cloud-controlled ICVs under diverse testing conditions.
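The latency modeling step above can be sketched as follows: a minimal illustration of drawing V2C round-trip latencies from a fitted Gamma distribution, as the platform does for its China and Hungary conditions. The shape/scale parameters here are invented placeholders, since the abstract does not report the fitted values.

```python
import random

# Hypothetical Gamma parameters per region. The paper fits Gamma
# distributions to empirical 5G V2C latency measurements; the actual
# fitted shape/scale values are not given in the abstract.
GAMMA_PARAMS = {
    "china":   {"shape": 2.0, "scale": 15.0},  # assumed mean ~30 ms
    "hungary": {"shape": 2.5, "scale": 20.0},  # assumed mean ~50 ms
}

def sample_latency_ms(region: str, rng: random.Random) -> float:
    """Draw one V2C round-trip latency in milliseconds for a region."""
    p = GAMMA_PARAMS[region]
    return rng.gammavariate(p["shape"], p["scale"])

# In co-simulation, each control message would be delayed by one draw.
rng = random.Random(42)
samples = [sample_latency_ms("china", rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # close to shape * scale = 30 ms
```

A per-message draw like this lets the platform inject stochastic, region-specific delay into the control loop rather than a fixed latency constant.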
- North America > United States (0.46)
- Europe > Austria > Styria > Graz (0.06)
- Europe > Hungary > Budapest > Budapest (0.05)
- Transportation > Ground > Road (1.00)
- Information Technology (1.00)
- Transportation > Infrastructure & Services (0.68)
Hungary and AI: efforts and opportunities in comparison with Singapore
The study assesses Hungary's National AI Strategy and its implementation through the analysis of strategic documents, publicly available financial records, and expert interviews with the Hungarian AI Coalition President and Chief Strategic Advisor to the Government Commissioner for AI. Twenty-two goals from Hungary's strategy were evaluated through conceptual, governance, temporal, and financial dimensions before being benchmarked against Singapore's National AI Strategies (NAIS 1.0 and NAIS 2.0). Key findings include an estimated total of EUR 4.65 billion in AI-related public investment in Hungary. Openly available financial data was found for only half of the evaluated goals, and just three projects made up 98% of all documented funding. The research also reveals Hungary's implementation challenges, including fragmented execution following ministerial reorganizations and the absence of designated biennial reviews since 2020. Furthermore, the paper provides targeted recommendations for Hungary's forthcoming AI strategy, drawing on Singapore's framework as a reference point. These include adapting to the era of large language models, restructuring the existing triple helix network to foster more effective dialogue and advocacy, and positioning the country as an East-West bridge for automotive AI experimentation.
- Asia > Singapore (0.76)
- Europe > United Kingdom (0.28)
- Europe > Hungary > Budapest > Budapest (0.05)
- Research Report (1.00)
- Instructional Material (0.92)
- Personal > Interview (0.48)
- Transportation > Ground > Road (1.00)
- Law (1.00)
- Government > Regional Government > Europe Government (1.00)
OpenHuEval: Evaluating Large Language Model on Hungarian Specifics
Yang, Haote, Wei, Xingjian, Wu, Jiang, Ligeti-Nagy, Noémi, Sun, Jiaxing, Wang, Yinfan, Yang, Zijian Győző, Gao, Junyuan, Wang, Jingchao, Jiang, Bowen, Wang, Shasha, Yu, Nanjun, Zhang, Zihao, Hong, Shixin, Liu, Hongwei, Li, Wei, Zhang, Songyang, Lin, Dahua, Wu, Lijun, Prószéky, Gábor, He, Conghui
We introduce OpenHuEval, the first benchmark for LLMs focusing on the Hungarian language and specifics. OpenHuEval is constructed from a vast collection of Hungarian-specific materials sourced from multiple origins. In the construction, we incorporated the latest design principles for evaluating LLMs, such as using real user queries from the internet, emphasizing the assessment of LLMs' generative capabilities, and employing LLM-as-judge to enhance the multidimensionality and accuracy of evaluations. Ultimately, OpenHuEval encompasses eight Hungarian-specific dimensions, featuring five tasks and 3953 questions. Consequently, OpenHuEval provides a comprehensive, in-depth, and scientifically accurate assessment of LLM performance in the context of the Hungarian language and its specifics. We evaluated current mainstream LLMs, including both traditional LLMs and recently developed Large Reasoning Models (LRMs). The results demonstrate the significant necessity for evaluation and model optimization tailored to the Hungarian language and specifics. We also established a framework for analyzing the thinking processes of LRMs with OpenHuEval, revealing intrinsic patterns and mechanisms of these models in non-English languages, with Hungarian serving as a representative example. We will release OpenHuEval at https://github.com/opendatalab/OpenHuEval.
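The LLM-as-judge principle mentioned above can be sketched as a prompt-and-parse loop: a judge model receives the question and candidate answer and returns a structured score. The template and scoring scale below are illustrative assumptions, not the actual OpenHuEval prompts, which the abstract does not reproduce.

```python
import re

def build_judge_prompt(question: str, answer: str) -> str:
    """Format a judge prompt for one Q/A pair (hypothetical template)."""
    return (
        "You are grading an answer to a Hungarian-specific question.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Rate correctness and fluency on a 1-10 scale. "
        "Reply exactly as 'Score: <n>'."
    )

def parse_score(judge_reply: str) -> int:
    """Extract the numeric score from the judge model's reply."""
    m = re.search(r"Score:\s*(\d+)", judge_reply)
    if m is None:
        raise ValueError(f"unparseable judge reply: {judge_reply!r}")
    return int(m.group(1))

prompt = build_judge_prompt("Mi Magyarország fővárosa?", "Budapest")
score = parse_score("Score: 7")  # reply text would come from the judge LLM
```

Constraining the judge to an explicit output format keeps the free-text evaluation machine-parseable, which is what makes multidimensional LLM-as-judge scoring practical at benchmark scale.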
- North America > United States (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- Europe > Ukraine (0.04)
- Government > Regional Government (0.46)
- Education > Educational Setting (0.46)
AdaptiVocab: Enhancing LLM Efficiency in Focused Domains through Lightweight Vocabulary Adaptation
Nakash, Itay, Calderon, Nitay, David, Eyal Ben, Hoffer, Elad, Reichart, Roi
Large Language Models (LLMs) have shown impressive versatility as general purpose models. However, their broad applicability comes at the cost of high computational overhead, particularly in auto-regressive decoding, where each step requires a forward pass. In domain-specific settings, general-purpose capabilities are unnecessary and can be exchanged for efficiency. In this work, we take a novel perspective on domain adaptation, reducing latency and computational costs by adapting the vocabulary to focused domains of interest. We introduce AdaptiVocab, an end-to-end approach for vocabulary adaptation, designed to enhance LLM efficiency in low-resource domains. AdaptiVocab can be applied to any tokenizer and architecture, modifying the vocabulary by replacing tokens with domain-specific n-gram-based tokens, thereby reducing the number of tokens required for both input processing and output generation. AdaptiVocab initializes new n-token embeddings using an exponentially weighted combination of existing embeddings and employs a lightweight fine-tuning phase that can be efficiently performed on a single GPU. We evaluate two 7B LLMs across three niche domains, assessing efficiency, generation quality, and end-task performance. Our results show that AdaptiVocab reduces token usage by over 25% without compromising performance.
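The embedding initialization described above can be sketched in a few lines: a new n-gram token's embedding is an exponentially weighted average of its constituent tokens' embeddings. The decay value and the weighting direction (later tokens weighted more heavily) are assumptions for illustration; the abstract does not specify them.

```python
def init_ngram_embedding(token_embs: list[list[float]],
                         decay: float = 0.5) -> list[float]:
    """Exponentially weighted average of constituent-token embeddings.

    token_embs: embeddings of the tokens the new n-gram token replaces,
    in order. Later tokens get higher weight (an assumed convention).
    """
    n = len(token_embs)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # oldest decays most
    total = sum(weights)
    dim = len(token_embs[0])
    return [
        sum(w * emb[d] for w, emb in zip(weights, token_embs)) / total
        for d in range(dim)
    ]

# Merging two single-token embeddings into one n-gram token embedding:
merged = init_ngram_embedding([[1.0, 0.0], [0.0, 1.0]], decay=0.5)
```

Seeding new rows of the embedding matrix this way keeps them close to the constituent tokens' representations, so the lightweight fine-tuning phase starts from a sensible point rather than from random vectors.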
- North America > United States > Florida > Miami-Dade County > Miami (0.14)
- Europe > Austria > Vienna (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)